15 results for High throughput nucleotide sequencing

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

A post-complementary metal oxide semiconductor (CMOS) compatible microfabrication process for piezoelectric cantilevers has been developed. The fabrication process is suitable for standard silicon technology and provides low-cost, high-throughput manufacturing. This work reports the design, fabrication and characterization of piezoelectric cantilevers based on aluminum nitride (AlN) thin films synthesized at room temperature. The proposed microcantilever system is a sandwich structure composed of chromium (Cr) electrodes and a sputtered AlN film. The key issue for cantilever fabrication is the room-temperature growth of the AlN layer by reactive sputtering, which makes piezoelectric MEMS devices compatible with already-processed CMOS circuits. AlN and Cr have been etched by inductively coupled plasma (ICP) dry etching using a BCl3/Cl2/Ar plasma chemistry. As part of the novelty of the post-CMOS micromachining process presented here, a silicon Si (100) wafer has been used both as the substrate and as the sacrificial layer etched to release the microcantilevers. To achieve this, the Si surface underneath the structure has been wet etched using an HNA (hydrofluoric acid + nitric acid + acetic acid) based solution. X-ray diffraction (XRD) characterization indicated the high crystalline quality of the AlN film. An atomic force microscope (AFM) has been used to determine the Cr electrode surface roughness. The morphology of the fabricated devices has been studied by scanning electron microscopy (SEM). The cantilevers have been piezoelectrically actuated and their out-of-plane vibration modes detected by vibrometry.
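The out-of-plane modes mentioned above are fixed by the beam geometry and material constants. As a rough sanity check, the first flexural resonance of a rectangular cantilever can be estimated from Euler-Bernoulli beam theory; the dimensions and material values below are illustrative assumptions, not data from the abstract:

```python
import math

def cantilever_resonance_hz(length_m, thickness_m, youngs_modulus_pa, density_kg_m3):
    """First flexural resonance of a rectangular cantilever (Euler-Bernoulli):

    f1 = (lambda1^2 / 2*pi) * sqrt(E/rho) * t / (sqrt(12) * L^2)
    """
    lam = 1.8751  # first-mode eigenvalue of the clamped-free beam
    return (lam ** 2 / (2 * math.pi)) * math.sqrt(youngs_modulus_pa / density_kg_m3) \
        * thickness_m / (math.sqrt(12) * length_m ** 2)

# Illustrative numbers for a silicon-like beam (assumptions, not device data):
f1 = cantilever_resonance_hz(200e-6, 2e-6, 169e9, 2330)
print(f"{f1 / 1e3:.1f} kHz")
```

For a real AlN/Cr stack the effective modulus and density would be thickness-weighted averages of the individual layers rather than single-material values.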

Relevance:

100.00%

Publisher:

Abstract:

The European chestnut (Castanea sativa Mill.) is a multipurpose species that has been widely cultivated around the Mediterranean basin since ancient times. New varieties were brought to the Iberian Peninsula during the Roman Empire, and these have coexisted since then with native populations that survived the last glaciation. The relevance of chestnut cultivation grew steadily from the Middle Ages until the rural decline of the past century put a stop to this trend. Forest fires and diseases were also major factors. Chestnut cultivation is gaining momentum again due to its economic (wood, fruits) and ecological relevance, and currently represents an important asset in many rural areas of Europe. In this Thesis we apply different molecular tools to help improve current management strategies. For this study we have chosen El Bierzo (Castile and Leon, NW Spain), which has a centuries-old tradition of chestnut cultivation and management, and also presents several unique features from a genetic perspective (next paragraph). Moreover, its nuts are widely appreciated in Spain and abroad for their organoleptic properties. We have focused our experimental work on two major problems faced by breeders and the industry: the lack of a fine-grained genetic characterization and the need for new strategies to control blight disease. To characterize in sufficient detail the genetic diversity and structure of El Bierzo orchards, we analyzed DNA from 169 trees grafted for nut production covering the entire region. We also analyzed 62 nuts from all traditional varieties. El Bierzo constitutes an outstanding scenario to study chestnut genetics and the influence of human management because: (i) it is located at one extreme of the distribution area; (ii) it is a major glacial refuge for the native species; (iii) it has a long tradition of human management (since Roman times, at least); and (iv) its geographical setting ensures an unusual degree of genetic isolation.
Thirteen microsatellite markers provided enough informativeness and discrimination power to genotype at the individual level. Together with an unexpected level of genetic variability, we found evidence of genetic structure, with three major gene pools giving rise to the current population. High levels of genetic differentiation between groups supported this organization. Interestingly, genetic structure does not match spatial boundaries, suggesting that the exchange of material and cultivation practices have strongly influenced natural gene flow. The microsatellite markers selected for this study were also used to classify a set of 62 samples belonging to all traditional varieties. We identified several cases of synonymies and homonymies, evidencing the need to substitute traditional classification systems with new tools for genetic profiling. Management and conservation strategies should also benefit from these tools. The advent of high-throughput sequencing technologies, combined with the development of bioinformatics tools, has paved the way to studying transcriptomes without the need for a reference genome. We took advantage of RNA sequencing and de novo assembly tools to determine the transcriptional landscape of chestnut in response to blight disease. In addition, we have selected a set of candidate genes with high potential for developing resistant varieties via genetic engineering. Our results evidenced a deep transcriptional reprogramming upon fungal infection. The plant hormones ethylene (ET) and jasmonate (JA) appear to orchestrate the defensive response. Interestingly, our results also suggest a role for auxins in modulating this response. Many transcription factors that interact with promoters of genes involved in disease resistance were identified in this work. Among these genes, we have conducted a functional characterization of two major thaumatin-like proteins (TLPs) belonging to the PR5 family.
Two genes encoding chestnut cotyledon TLPs, termed CsTL1 and CsTL2, have been previously characterized. We substantiate here, for the first time, their protective role against blight disease, providing in silico, in vitro and in vivo evidence. The synergy between TLPs and other antifungal proteins, particularly endo-β-1,3-glucanases, bolsters their interest for future control strategies based on biotechnological approaches.
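The synonymy/homonymy detection described above reduces, at its core, to comparing multilocus genotypes: two accessions with different names but identical SSR profiles are candidate synonymies. A minimal sketch, where sample names and allele sizes are hypothetical and a real analysis would also handle missing data and genotyping error:

```python
from collections import defaultdict

def find_synonymies(genotypes):
    """Group accessions sharing an identical multilocus SSR profile.

    `genotypes` maps accession name -> tuple of (allele1, allele2) pairs,
    one pair per microsatellite locus. Identical profiles under different
    names are candidate synonymies.
    """
    profile_to_names = defaultdict(list)
    for name, loci in genotypes.items():
        # Sort alleles within each locus so (a, b) == (b, a)
        key = tuple(tuple(sorted(pair)) for pair in loci)
        profile_to_names[key].append(name)
    return [sorted(names) for names in profile_to_names.values() if len(names) > 1]

# Toy data with three hypothetical loci (illustrative, not from the thesis):
samples = {
    "Parede": ((180, 184), (201, 201), (150, 158)),
    "Courelá": ((180, 184), (201, 201), (150, 158)),  # same profile: synonymy
    "Negral": ((178, 184), (199, 201), (150, 152)),
}
print(find_synonymies(samples))  # [['Courelá', 'Parede']]
```

Homonymies are the converse check: the same variety name attached to clearly different profiles.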

Relevance:

100.00%

Publisher:

Abstract:

Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependencies that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD.
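Bayesian network ensembles are considerably richer than what fits here, but the consensus idea the abstract relies on can be illustrated with a toy majority-vote ensemble of single-transcript classifiers on synthetic expression data (all names and values below are made up; this is not the paper's method):

```python
import random
import statistics

def train_stump(samples):
    """One 'weak' expert: a threshold on a single randomly chosen transcript."""
    gene = random.randrange(len(samples[0][0]))
    th = statistics.median(x[gene] for x, _ in samples)
    # Orient the stump: does 'above threshold' predict class 1 (patient)?
    above = [y for x, y in samples if x[gene] > th]
    pos = (sum(above) >= len(above) / 2) if above else True
    return gene, th, pos

def predict_ensemble(stumps, x):
    """Majority vote across the ensemble, as in consensus approaches."""
    votes = sum(int((x[g] > t) == p) for g, t, p in stumps)
    return int(votes >= len(stumps) / 2)

random.seed(0)
# Toy expression profiles: 3 transcripts, class 1 = 'patient' (synthetic data)
data = [((2.0, 5.1, 0.3), 1), ((1.9, 4.8, 0.2), 1),
        ((0.4, 1.2, 0.5), 0), ((0.5, 1.0, 0.4), 0)]
ensemble = [train_stump(data) for _ in range(15)]
print(predict_ensemble(ensemble, (2.1, 5.0, 0.3)))  # expected: 1
```

The paper's classifiers model dependencies between transcripts rather than thresholding them independently, which is precisely what lets the ensemble expose interaction networks.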

Relevance:

100.00%

Publisher:

Abstract:

Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has been done mainly using procedures with close to minimal information disclosure but heavy interactivity, like Cascade, or using less efficient but also less interactive 'just one message is exchanged' procedures, like the ones based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing the use of several codes if a broad range of QBER needs to be catered for. Real-world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the latest generation of GHz-clocked QKD systems are finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low-interactivity, blind protocol that avoids an a priori error rate estimation. We demonstrate that LDPC codes as short as 2 × 10³ bits in length are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
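The blind protocol's shape can be sketched independently of any particular code: decoding is attempted with nothing disclosed beyond the syndrome, and each failure triggers one extra message revealing a few more bit values (turning punctured symbols into shortened ones) until the decoder converges, so no prior QBER estimate is needed. The decoder below is a deliberate stand-in that simply trusts revealed positions; it is not an LDPC decoder:

```python
import random

def blind_reconcile(alice_bits, bob_bits, decode, step):
    """Skeleton of a blind, rate-adaptive reconciliation round.

    `decode` stands in for a real LDPC decoder; `step` bits are disclosed
    per failed attempt, in a pre-agreed pseudo-random order.
    """
    revealed = {}
    order = list(range(len(alice_bits)))
    random.shuffle(order)
    rounds = 0
    while True:
        rounds += 1
        guess = decode(bob_bits, revealed)
        if guess == alice_bits:
            return guess, rounds, len(revealed)
        for i in order[len(revealed):len(revealed) + step]:
            revealed[i] = alice_bits[i]  # one extra message per iteration
        if len(revealed) == len(alice_bits):
            return alice_bits, rounds, len(revealed)

# Stand-in decoder: trusts revealed positions, keeps Bob's bits elsewhere.
def toy_decode(bob_bits, revealed):
    return [revealed.get(i, b) for i, b in enumerate(bob_bits)]

random.seed(1)
alice = [random.randint(0, 1) for _ in range(32)]
bob = [b ^ (random.random() < 0.1) for b in alice]   # ~10% QBER channel
out, rounds, disclosed = blind_reconcile(alice, bob, toy_decode, step=4)
print(out == alice, rounds, disclosed)
```

With a real short LDPC code, the efficiency cost of blindness is the few extra disclosure rounds, traded against not having to estimate the QBER in advance.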

Relevance:

100.00%

Publisher:

Abstract:

The Session Initiation Protocol (SIP) is an application-layer control protocol standardized by the IETF for creating, modifying and terminating multimedia sessions. With the increasing use of SIP in large deployments, the current SIP design cannot handle overload effectively, which may cause SIP networks to suffer from congestion collapse under heavy offered load. This paper introduces a distributed end-to-end overload control (DEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By applying overload control closest to the source of traffic, DEOC can keep throughput high for SIP networks even when the offered load exceeds the capacity of the network. Moreover, it responds quickly to sudden variations of the offered load and achieves good fairness. Theoretical analysis and extensive simulations verify that DEOC is effective in controlling overload of SIP networks.
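The abstract does not give DEOC's actual update rule, so the sketch below only illustrates the general shape of edge-based overload control: a generic additive-increase/multiplicative-decrease admission rate at the edge server, adjusted from downstream feedback (this is an assumed mechanism, not the paper's algorithm):

```python
class EdgeOverloadControl:
    """Sketch of end-to-end overload throttling at a SIP edge server.

    Generic AIMD admission control driven by downstream feedback
    (e.g. 503 responses or missing answers), capturing the idea of
    applying control closest to the traffic source.
    """
    def __init__(self, rate=100.0, min_rate=1.0, alpha=5.0, beta=0.5):
        self.rate = rate          # admitted sessions per second
        self.min_rate = min_rate
        self.alpha = alpha        # additive increase step
        self.beta = beta          # multiplicative decrease factor

    def on_feedback(self, overloaded):
        if overloaded:
            self.rate = max(self.min_rate, self.rate * self.beta)
        else:
            self.rate += self.alpha
        return self.rate

ctrl = EdgeOverloadControl()
for signal in [True, True, False, False, False]:
    ctrl.on_feedback(signal)
print(round(ctrl.rate, 1))  # 100*0.5*0.5 + 3*5 = 40.0
```

Multiplicative decrease reacts quickly to sudden load spikes, while the slow additive increase probes for spare downstream capacity; AIMD also tends toward fair sharing among competing edge servers.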

Relevance:

100.00%

Publisher:

Abstract:

As wafer-based solar cells become thinner, light-trapping textures for absorption enhancement will gain in importance. In this work, crystalline silicon wafers were textured with wavelength-scale diffraction grating surface textures by nanoimprint lithography, using interference lithography as a mastering technology. This technique allows fine-tailored nanostructures to be realized on large areas with high throughput. Solar cell precursors were fabricated, with the surface textures on the rear side, for optical absorption measurements. Large absorption enhancements are observed in the wavelength range in which the silicon wafer absorbs weakly. It is shown experimentally that bi-periodic crossed gratings perform better than uni-periodic linear gratings. Optical simulations of the fabricated structures have been performed, allowing the total absorption to be decomposed into useful absorption in the silicon and parasitic absorption in the rear reflector. Using the calculated silicon absorption, promising absorbed photocurrent density enhancements have been calculated for solar cells employing the nano-textures. Finally, initial results are presented for a passivation layer deposition technique that planarizes the rear reflector for the purpose of reducing the parasitic absorption.
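A wavelength-scale rear grating enhances absorption by redirecting weakly absorbed light into oblique diffraction orders that travel a longer path inside the silicon. Which orders propagate follows from the grating equation at normal incidence, n·sin(θ_m) = m·λ/Λ; the period, wavelength and refractive index below are illustrative assumptions, not the paper's parameters:

```python
import math

def propagating_orders(period_nm, wavelength_nm, n_medium):
    """Diffraction orders m with |sin(theta_m)| <= 1 for normal incidence:
    n * sin(theta_m) = m * wavelength / period  (grating equation).
    """
    orders = []
    m = 0
    while True:
        s = m * wavelength_nm / (n_medium * period_nm)
        if s > 1:
            break
        orders.extend({m, -m})
        m += 1
    return sorted(set(orders))

# Illustrative: 1000 nm period, 1100 nm light inside silicon (n ≈ 3.5)
print(propagating_orders(1000, 1100, 3.5))  # [-3, -2, -1, 0, 1, 2, 3]
print(propagating_orders(1000, 1100, 1.0))  # [0] — only the specular order in air
```

The contrast between the two calls shows the light-trapping mechanism: orders that propagate inside high-index silicon are evanescent in air, so obliquely diffracted light stays trapped by total internal reflection. Crossed (bi-periodic) gratings add a second set of orders along the orthogonal axis, consistent with their better measured performance.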

Relevance:

100.00%

Publisher:

Abstract:

La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad de recursos limitados, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda.
En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable. De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en substituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huellas escalables. En particular, se propone el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones, para este propósito. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración. En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte.
A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, y que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis.
Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas a donde el módulo es originalmente implementado, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router ha sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad, frente a ataques de canal lateral, mediante la implantación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real. De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo, con la configuración existente en el dispositivo previamente. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo.
Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia del reloj debería ser 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así ser capaces de evaluar la tolerancia ante los mismos que ofrecen las arquitecturas reconfigurables. Los fallos son emulados mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir la configuración de las mismas en tiempo de diseño.
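Fault injection through deliberately corrupted configuration files, as described above, boils down to flipping bits in a partial bitstream before it is written through the configuration port. A minimal, device-agnostic sketch (purely illustrative: real bitstreams carry frame addressing and CRC fields that a reconfiguration engine must also handle):

```python
def inject_fault(bitstream: bytes, bit_index: int) -> bytes:
    """Emulate a configuration-memory fault by flipping one bit of a
    (partial) bitstream before it is written through the ICAP."""
    byte_i, bit_i = divmod(bit_index, 8)
    faulty = bytearray(bitstream)
    faulty[byte_i] ^= 1 << bit_i
    return bytes(faulty)

original = bytes([0b10110000, 0b00001111])
faulty = inject_fault(original, 10)   # flip bit 2 of the second byte
print(faulty[1] == 0b00001011)        # True
```

Flipping the same bit again restores the original configuration, which is why the same mechanism can emulate both transient faults (flip, then repair) and permanent ones (leave the corrupted frame in place).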
De esta manera, la configuración del core puede cambiar cuando lo hacen las condiciones del entorno, en tiempo real, por lo que se consigue un control autónomo del proceso de reconfiguración dinámico. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios, como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema. Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los codecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se va a emplear el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria, como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables es propuesto en esta tesis con este objetivo.
El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC, y además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables para la compresión de vídeo. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía el tamaño de la arquitectura, el orden de filtrado de los macrobloques varía de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes:
- El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables.
- El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo, y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento.
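The macroblock-level parallel scheme above can be illustrated with a generic wavefront schedule. Note this sketch uses the classic left/top/top-right dependency of H.264 deblocking (the 2:1 wavefront), not the thesis' specific two-stage horizontal/vertical split:

```python
def wavefront_schedule(mb_cols, mb_rows):
    """Group macroblocks into wavefronts that can be filtered in parallel.

    With the left, top and top-right neighbours required first,
    macroblock (x, y) belongs to wave x + 2*y; all macroblocks in the
    same wave are mutually independent.
    """
    waves = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            waves.setdefault(x + 2 * y, []).append((x, y))
    return [waves[w] for w in sorted(waves)]

for i, wave in enumerate(wavefront_schedule(4, 3)):
    print(i, wave)
```

Scaling the architecture up or down simply changes how many entries of each wave are filtered simultaneously, which is why the filtering order tracks the architecture size.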
- Un flujo de diseño y una herramienta que lo soporta, para el diseño de sistemas reconfigurables dinámicamente, centradas en el diseño de las arquitecturas altamente paralelas, modulares y regulares propuestas en esta tesis.
- Un esquema de comunicaciones entre módulos reconfigurables que no introduce ningún retardo ni requiere el uso de recursos lógicos propios.
- Un router flexible, capaz de resolver los conflictos de rutado asociados con el diseño de sistemas reconfigurables dinámicamente.
- Un algoritmo de optimización para sistemas formados por múltiples cores escalables que optimice, mediante un algoritmo genético, los parámetros de dicho sistema. Se basa en un modelo conocido como el problema de la mochila.
- Un motor de reconfiguración adaptado a los requisitos de las arquitecturas altamente regulares y modulares. Combina una alta velocidad de reconfiguración, con la capacidad de realojar módulos en tiempo real, incluyendo el soporte para la reconfiguración de regiones que ocupan menos que una región de reloj, así como la réplica de un módulo reconfigurable en múltiples posiciones del dispositivo.
- Un mecanismo de inyección de fallos que, empleando el motor de reconfiguración del sistema, permite evaluar los efectos de fallos permanentes y transitorios en arquitecturas reconfigurables.
- La demostración de las posibilidades de las arquitecturas propuestas en esta tesis para la implementación de sistemas de hardware evolutivo, con una alta capacidad de procesamiento de datos.
- La implementación de sistemas de hardware evolutivo escalables, que son capaces de tratar con la fluctuación de la cantidad de recursos disponibles en el sistema, de una forma autónoma.
- Una estrategia de procesamiento en paralelo para el deblocking filter compatible con los estándares H.264/AVC y SVC que reduce el número de ciclos de macrobloque necesarios para procesar un frame de vídeo.
- Una arquitectura dinámicamente escalable que permite la implementación de un nuevo deblocking filter, totalmente compatible con los estándares H.264/AVC y SVC, que explota el paralelismo a nivel de macrobloque.

El presente documento se organiza en siete capítulos. En el primero se ofrece una introducción al marco tecnológico de esta tesis, especialmente centrado en la reconfiguración dinámica y parcial de FPGAs. También se motiva la necesidad de las arquitecturas dinámicamente escalables propuestas en esta tesis. En el capítulo 2 se describen las arquitecturas dinámicamente escalables. Dicha descripción incluye la mayor parte de las aportaciones a nivel arquitectural realizadas en esta tesis. Por su parte, el flujo de diseño adaptado a dichas arquitecturas se propone en el capítulo 3. El motor de reconfiguración se propone en el 4, mientras que el uso de dichas arquitecturas para implementar sistemas de hardware evolutivo se aborda en el 5. El deblocking filter escalable se describe en el 6, mientras que las conclusiones finales de esta tesis, así como la descripción del trabajo futuro, son abordadas en el capítulo 7.

ABSTRACT

The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Rather, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery life-time. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration. This is a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off with the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks.
This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing for reconfigurable modules, or an inter-module communication strategy which does not introduce either extra delay or logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphic user interface. The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communications scheme based on the symmetry of the architecture.
The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks; this is achieved by duplicating the logic with identical routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also designed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation ability, which allows a single configuration bitstream to be employed for all the positions where a module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process while also reducing the length of the partial bitstreams. The height of the reconfigurable modules can be lower than the height of a clock region; the Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer: in the case of Virtex-5, even though the maximum reported ICAP frequency is 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered.
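The frame-merging step performed by the Reconfiguration Engine when a module is shorter than a clock region can be pictured as a read-modify-write over the rows of a frame. The following sketch is a simplified, schematic model (real Xilinx configuration frames interleave the bits of a whole clock-region column, which this toy row-per-element representation does not capture):

```python
def compose_frame(existing_frame, module_rows, first_row):
    """Read-modify-write composition of a configuration frame when a
    reconfigurable module spans only part of a clock region: the rows
    belonging to the module are replaced with the module's bitstream
    content, while the rest of the frame is preserved unchanged."""
    frame = list(existing_frame)
    frame[first_row:first_row + len(module_rows)] = module_rows
    return frame

# A clock region modelled as 8 rows; a 3-row module is placed at row 2:
old = ["r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7"]
merged = compose_frame(old, ["m0", "m1", "m2"], first_row=2)
# rows 2-4 now come from the module bitstream; the others are untouched
```

The same composition operation serves both relocation (writing the module rows at a different `first_row`) and footprint scaling (writing more or fewer module rows).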
The Reconfiguration Engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a pre-synthesized library of processing elements according to the decisions taken by the algorithm, instead of being fixed at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process; the self-optimization property is thus added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features. The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters, and scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system, and thus constitute an example of the dynamic quality scalability tackled in this thesis.
Two variants have been proposed: the first consists of a single dynamically scalable evolvable core, while the second contains a variable number of processing cores. Scalable video is a flexible approach to video compression which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded at different levels of quality, spatial or temporal resolution, by discarding the undesired information. Interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time, and the use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. It is one of the most computationally intensive tasks of the standard and, furthermore, is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept for the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is complemented by a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock.
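The wavefront pattern described above can be illustrated with a small scheduling sketch. It assumes, as in the usual H.264/AVC dependency pattern, that a macroblock can be filtered once its left and upper neighbours are done; the greedy batching below is illustrative and is not the exact run-time scheduler of the thesis:

```python
def wavefront_schedule(rows, cols, units):
    """Greedy wavefront schedule for a rows x cols macroblock grid.

    A macroblock (r, c) becomes ready once its left (r, c-1) and
    upper (r-1, c) neighbours have been filtered; at each cycle up to
    `units` ready macroblocks are filtered in parallel.
    Returns the list of macroblock batches processed per cycle.
    """
    schedule = []
    pending = {(r, c) for r in range(rows) for c in range(cols)}
    while pending:
        # Ready = anti-diagonal front whose dependencies are satisfied.
        ready = sorted(
            (r, c) for (r, c) in pending
            if (r, c - 1) not in pending and (r - 1, c) not in pending
        )
        batch = ready[:units]          # capped by the number of units
        schedule.append(batch)
        pending -= set(batch)
    return schedule

# Adding parallel units shortens the schedule for the same frame:
cycles_with_2_units = len(wavefront_schedule(4, 4, 2))
cycles_with_4_units = len(wavefront_schedule(4, 4, 4))
```

With enough units the schedule collapses to the anti-diagonal count (2·4 − 1 = 7 cycles for a 4×4 grid), which is the behaviour the scalable filter exploits when computational units are added at run-time.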
The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, named Virtual Borders, which introduces neither delay nor area overhead.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for modules smaller than a clock region and for replication in multiple positions.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt autonomously to fluctuations in the amount of resources available in the system.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. The first provides an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, and motivates the need for the dynamically scalable processing architectures proposed in this work. Chapter 2 describes the dynamically scalable architectures, including most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis and a description of future work are presented in chapter 7.
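The multiple-choice knapsack formulation listed among the contributions, which sizes several scalable cores at once against a shared resource budget, can be sketched as follows. For clarity this toy version uses a single resource dimension and brute-force enumeration; the thesis addresses the multi-dimensional variant, and the numeric option tables below are invented for illustration:

```python
from itertools import product

def best_core_sizes(cores, area_budget):
    """Exhaustive solver for a small multiple-choice knapsack: pick
    exactly one (area, performance) variant per scalable core so that
    the total area fits the budget and total performance is maximal.

    `cores` is a list of option lists; each option is (area, perf).
    Returns (best_perf, chosen_indices), or (None, None) if infeasible.
    """
    best = (None, None)
    for choice in product(*[range(len(c)) for c in cores]):
        area = sum(cores[i][k][0] for i, k in enumerate(choice))
        perf = sum(cores[i][k][1] for i, k in enumerate(choice))
        if area <= area_budget and (best[0] is None or perf > best[0]):
            best = (perf, choice)
    return best

# Two scalable cores, each with three hypothetical footprint/performance
# variants (e.g. logic columns vs. pixels per cycle):
cores = [
    [(2, 10), (4, 18), (8, 30)],
    [(3, 12), (6, 20), (9, 26)],
]
perf, sizes = best_core_sizes(cores, area_budget=10)
```

Real instances are solved with dedicated MMKP heuristics rather than enumeration, since the search space grows as the product of the option counts.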


Resumo:

The Session Initiation Protocol (SIP) has been adopted by the IETF as the control protocol for creating, modifying and terminating multimedia sessions. Overload occurs in SIP networks when SIP servers have insufficient resources to handle the messages they receive. Under overload, SIP networks may suffer congestion collapse because current SIP overload control mechanisms are ineffective. This paper introduces a probe-based end-to-end overload control (PEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By probing the SIP network with SIP messages, PEOC estimates the network load and controls the traffic admitted to the network according to the estimated load. Theoretical analysis and extensive simulations verify that PEOC can maintain high throughput for SIP networks even when the offered load exceeds the capacity of the network. Moreover, it responds quickly to sudden variations in the offered load and achieves good fairness.
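The probe-then-throttle idea can be sketched with a minimal admission controller. This is an illustrative stand-in, not the PEOC estimator from the paper: it simply reacts to probe delay with an AIMD (additive-increase, multiplicative-decrease) rule, and all the parameter values are assumptions:

```python
class ProbeBasedAdmissionControl:
    """Toy sketch of probe-based end-to-end overload control at an
    edge server: when a probe's measured delay exceeds a target, the
    admitted-traffic rate is cut multiplicatively; otherwise it is
    raised additively. (The real PEOC load estimator is more refined.)
    """

    def __init__(self, rate=100.0, target_delay=0.2,
                 increase=5.0, decrease=0.5, min_rate=1.0):
        self.rate = rate                  # admitted messages/s
        self.target_delay = target_delay  # seconds
        self.increase = increase
        self.decrease = decrease
        self.min_rate = min_rate

    def on_probe_result(self, probe_delay):
        """Update the admitted rate from one probe measurement."""
        if probe_delay > self.target_delay:   # overload detected
            self.rate = max(self.min_rate, self.rate * self.decrease)
        else:                                 # spare capacity observed
            self.rate += self.increase
        return self.rate

ctrl = ProbeBasedAdmissionControl()
ctrl.on_probe_result(0.05)   # fast probe: rate grows additively
ctrl.on_probe_result(0.50)   # slow probe: rate halves
```

The multiplicative backoff is what keeps throughput from collapsing when offered load exceeds capacity, while the additive recovery quickly reclaims capacity once probes speed up again.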


Resumo:

Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
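One elementary building block shared by several of the analyses surveyed above (embryo detection, dead-embryo identification, cell counting) is connected-component labeling of a thresholded image. The sketch below shows the idea on a hand-made binary mask; real zebrafish pipelines operate on grayscale microscopy data with libraries such as scikit-image, which this pure-Python toy only imitates:

```python
def count_objects(mask):
    """Count connected foreground regions (4-connectivity) in a binary
    image via iterative flood fill - the core of simple object-counting
    read-outs such as embryos per well (illustrative only)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]            # flood-fill this region
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and mask[y][x] and not seen[y][x]):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

# A toy 5x6 "image" with two separate bright regions:
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
n = count_objects(mask)
```

At screening scale the same operation runs over thousands of images, which is why the data-handling and pipeline-compatibility issues discussed in the abstract become the bottleneck.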


Resumo:

The label-free immunoassay sector is a ferment of activity, experiencing rapid growth as new technologies come forward and achieve acceptance. The landscape is changing in a bottom-up fashion, as individual companies promote individual technologies and find markets for them. Consequently, each of the companies operating in the label-free immunoassay sector offers a technology that is in some way unique and proprietary. However, few label-free technologies are currently on the market for point-of-care (PoC) and high-throughput screening (HTS) applications, where mature labeled technologies dominate.


Resumo:

Obtaining energy from nuclear fusion by magnetic confinement of the plasma is one of the main goals of the scientific community devoted to nuclear energy. Since the construction of the first fusion device, a multitude of experiments have been carried out, a large part of which now give support to the International Thermonuclear Experimental Reactor (ITER) project. The main challenge ITER faces is the monitoring and control of the plasma. Thanks to new technologies, instrumentation and control systems bring the solution closer, but at the same time it becomes harder to standardize the data acquisition systems used, not only in ITER but also in other projects of similar complexity. Developing new hardware and software implementations under the requirements of the diagnostics defined by scientists entails a large investment of time, delaying the execution of new experiments. Therefore, the solution proposed in this thesis consists in the definition of a design methodology that enables the implementation of intelligent data acquisition systems and their easy integration into fusion environments for diagnostic purposes. This methodology requires the use of Reconfigurable Input/Output (RIO) and Flexible RIO (FlexRIO) devices, which are embedded systems based on Field-Programmable Gate Array (FPGA) technology. To complete the design methodology, these devices are supported by software based on EPICS Device Support using the EPICS asynDriver technology. The methodology has been evaluated by implementing prototypes for the ITER fast plant controllers, both for general-purpose use cases such as data and image acquisition and for specific cases such as the fission chamber diagnostic, implementing real-time preprocessing. Beyond these prototypes, the methodology has been applied to real experiments such as the Ion Source Hydrogen Positive (ISHP), developed by the European Spallation Source (ESS Bilbao) and the University of the Basque Country. Finally, to meet the needs of experiments in fusion environments, a mechanism has been designed by which the data acquisition systems implemented with the proposed methodology can integrate a hardware clock capable of synchronizing with the IEEE 1588-V2 precision time protocol, allowing them to timestamp acquired samples with an accuracy and precision of tens of nanoseconds and to perform high-throughput streaming of timestamped data.
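The IEEE 1588 synchronization that the hardware clock implements rests on a simple four-timestamp exchange. The arithmetic below is the standard protocol calculation (it illustrates only the math, not the FPGA clock-servo implementation of the thesis):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 offset/delay computation.

    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assuming a symmetric network path, the slave's clock offset from
    the master and the one-way path delay follow directly from the
    four timestamps.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example (nanoseconds): slave clock 150 ns ahead of the master,
# symmetric path delay of 50 ns:
offset, delay = ptp_offset_and_delay(t1=0, t2=200, t3=1000, t4=900)
```

Once the offset is known, the hardware clock is steered toward the master, which is what enables timestamps accurate to tens of nanoseconds across distributed acquisition nodes.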


Resumo:

One of the main limiting factors in the development of new magnesium (Mg) alloys with enhanced mechanical behavior is the need for vast experimental campaigns for microstructure and property screening. For example, the influence of new alloying additions on the critical resolved shear stresses (CRSSs) is currently evaluated by a combination of macroscopic single-crystal experiments and crystal plasticity finite-element simulations (CPFEM). This time-consuming process could be considerably simplified by the introduction of high-throughput techniques for efficient property testing. The aim of this paper is to propose a new and fast methodology for estimating the CRSSs of hexagonal close-packed metals which, moreover, requires only small amounts of material. The proposed method, which combines instrumented nanoindentation and CPFEM modeling, determines CRSS values by comparing the variation of hardness (H) across grain orientations with the outcome of CPFEM simulations. This novel approach has been validated on a rolled and annealed pure Mg sheet, whose H variation with grain orientation has been successfully predicted using a set of CRSSs taken from recent crystal plasticity simulations of single-crystal experiments. Moreover, the proposed methodology has been used to infer the effect of the alloying elements of an MN11 (Mg-1% Mn-1% Nd) alloy. The results support the hypothesis that selected rare-earth intermetallic precipitates help bring the CRSS values of basal and non-basal slip systems closer together, thus contributing to the reduced plastic anisotropy observed in these alloys.
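The inverse step of the methodology, choosing the CRSS set whose simulated hardness-vs-orientation curve best matches the nanoindentation data, can be sketched as a least-squares selection. In the paper the predictor is a CPFEM simulation; here it is replaced by a deliberately simple, invented surrogate (`toy_predict`) so the fitting logic can be shown self-contained:

```python
import math

def fit_crss(measured, candidate_sets, predict_hardness):
    """Pick the candidate CRSS set minimizing the squared error between
    measured hardness per grain orientation and the predicted values.

    measured: {orientation_deg: hardness}; candidate_sets: list of CRSS
    dicts; predict_hardness(crss, orientation) stands in for a CPFEM
    run in the real methodology."""
    def error(crss):
        return sum((measured[o] - predict_hardness(crss, o)) ** 2
                   for o in measured)
    return min(candidate_sets, key=error)

def toy_predict(crss, angle_deg):
    # Invented surrogate: hardness interpolates between basal- and
    # prism-dominated responses with the indentation declination angle.
    a = math.radians(angle_deg)
    return 3 * (crss["basal"] * math.cos(a) ** 2
                + crss["prism"] * math.sin(a) ** 2)

# Synthetic "measurements" generated from basal=20, prism=15:
measured = {0: 60.0, 45: 52.5, 90: 45.0}
candidates = [
    {"basal": 10, "prism": 40},
    {"basal": 20, "prism": 15},
    {"basal": 30, "prism": 30},
]
best = fit_crss(measured, candidates, toy_predict)
```

Because each hardness measurement only takes minutes and consumes almost no material, scanning many orientations this way is what makes the approach high-throughput compared with single-crystal testing.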


Resumo:

The huge growth of MEMS (Micro Electro Mechanical Systems), as well as their presence in most of the devices we use daily, aroused our interest in them. At the same time, CMOS (Complementary Metal Oxide Semiconductor) technology is the most widely used technology for integrated circuits. In addition to advantages related to the electronic operation of the final device, the integration of MEMS with CMOS technology significantly reduces manufacturing costs. Some of the MEMS devices with the widest variety of applications are microcantilevers. These devices can be used for energy harvesting, in atomic force microscopes, or as sensors, for example for biodetection. Most of the piezoelectric materials used in these MEMS applications are synthesized at high temperature and are consequently not compatible with CMOS technology. In our case we have used aluminum nitride (AlN), which is deposited at room temperature and hence fully compatible with CMOS technology. Moreover, it is biocompatible and can therefore form part of a biosensing device. Throughout this thesis we have paid special attention to developing a fast, reproducible and low-cost fabrication process, and to this end all fabrication steps have been thoroughly optimized. The sputtering parameters used to deposit the AlN, the different etching techniques and recipes, the materials acting as electrodes, and the sacrificial layers used to release the cantilevers are some of the key factors studied in this work. Once the fabrication of the AlN microcantilevers had been optimized, they were measured to characterize their piezoelectric properties and to successfully verify their viability as piezoelectric devices.


Resumo:

The complex event processing (CEP) paradigm has become the solution for real-time analysis of large volumes of data, such as stock-price monitoring or road-traffic monitoring, which demands scalability, high throughput and low latency. In this paradigm, incoming events must be processed on the fly, without being stored, because the data volume is too high and low latency is required; distributed systems are used to meet these performance requirements. Such systems are usually complex and require a considerably long time to learn and use, yet many of them lack a declarative query language in which to express the computation to be performed over incoming events. In this work, a new SQL-like declarative query language and a compiler have been developed; the compiler translates this language into the native language of the distributed event processing system. Since a large number of developers are already familiar with SQL, learning the new language requires little effort. Its use thus reduces execution errors in queries deployed on the distributed system, while abstracting the programmer from the details of the underlying system.
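The compilation idea, a declarative query mapped onto a chain of streaming operators, can be illustrated with a toy translator. The grammar, keywords and operator names below are invented for illustration; they are not the language or target system defined in the thesis:

```python
import re

def compile_query(query):
    """Translate a toy SQL-like CEP query into a list of operator
    descriptions, mimicking (in a highly simplified way) how a
    declarative query maps onto a distributed streaming topology."""
    m = re.match(
        r"SELECT\s+(\w+)\((\w+)\)\s+FROM\s+(\w+)\s+WINDOW\s+(\d+)s",
        query.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query")
    agg, field, stream, seconds = m.groups()
    # Source -> time window -> aggregation: the operator pipeline the
    # distributed engine would execute over the incoming event stream.
    return [
        ("source", stream),
        ("window", {"type": "time", "seconds": int(seconds)}),
        ("aggregate", {"fn": agg.lower(), "field": field}),
    ]

plan = compile_query("SELECT AVG(price) FROM stock_ticks WINDOW 60s")
```

A real compiler additionally performs type checking and operator placement across the cluster, which is precisely the detail the declarative layer hides from the programmer.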


Resumo:

Thermal imaging has been used to evaluate the response to drought and warm temperatures in a collection of Brachypodium distachyon lines adapted to varied environmental conditions. Thermographic records were able to separate lines from contrasting rainfall regimes. Genotypes from drier environments showed warmer leaves under water deficit, which suggested that decreased evapotranspiration was related to more intense stomatal closure. When irrigated and under high-temperature conditions, drought-adapted lines showed cooler leaves than lines from wetter zones. The consistent, inverse thermographic response of lines to water stress and heat validates the reliability of this method to assess drought tolerance in this model cereal. It additionally supports the hypothesis that stomatal-based mechanisms are involved in natural variation for drought tolerance in Brachypodium. The study further suggests that these mechanisms are not constitutive but likely related to a more efficient closing response to avoid dehydration in adapted genotypes. Higher leaf temperature under water deficit seems a dependable criterion of drought tolerance, not only in B. distachyon but also in the main cereal crops and related grasses, where thermography can facilitate high-throughput preliminary screening of tolerant materials.